195 research outputs found
Learning the Preferences of Ignorant, Inconsistent Agents
An important use of machine learning is to learn what people value. What
posts or photos should a user be shown? Which jobs or activities would a person
find rewarding? In each case, observations of people's past choices can inform
our inferences about their likes and preferences. If we assume that choices are
approximately optimal according to some utility function, we can treat
preference inference as Bayesian inverse planning. That is, given a prior on
utility functions and some observed choices, we invert an optimal
decision-making process to infer a posterior distribution on utility functions.
However, people often deviate from approximate optimality. They have false
beliefs, their planning is sub-optimal, and their choices may be temporally
inconsistent due to hyperbolic discounting and other biases. We demonstrate how
to incorporate these deviations into algorithms for preference inference by
constructing generative models of planning for agents who are subject to false
beliefs and time inconsistency. We explore the inferences these models make
about preferences, beliefs, and biases. We present a behavioral experiment in
which human subjects perform preference inference given the same observations
of choices as our model. Results show that human subjects (like our model)
explain choices in terms of systematic deviations from optimal behavior and
suggest that they take such deviations into account when inferring preferences.
Comment: AAAI 201
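The core inversion can be illustrated with a short sketch. The code below is not the paper's implementation: it assumes a toy two-option setting, a softmax ("Boltzmann-rational") choice rule, a discrete grid prior over the unknown utility of the delayed option, and a hyperbolic discount parameter k; all names and numbers are illustrative. It shows how the same observed choices lead to different utility posteriors depending on whether the observer models time inconsistency.

```python
# A minimal sketch of preference inference as Bayesian inverse planning,
# under the illustrative assumptions stated above.

import numpy as np

def hyperbolic_discount(utility, delay, k):
    """Hyperbolically discounted value: U / (1 + k * delay)."""
    return utility / (1.0 + k * delay)

def choice_probability(u_b, delay_b, k, beta, u_a=1.0):
    """P(choose B) under a softmax choice rule between an immediate
    option A (utility u_a) and a delayed option B (utility u_b)."""
    v_a = u_a                                   # immediate, undiscounted
    v_b = hyperbolic_discount(u_b, delay_b, k)  # delayed, discounted
    return 1.0 / (1.0 + np.exp(-beta * (v_b - v_a)))

def posterior_over_utility(choices, delays, k, beta, u_grid):
    """Grid-based inverse planning: invert the choice model to get a
    posterior over B's utility, given observed choices (1 = chose B,
    0 = chose A) at the given delays. Uniform prior over u_grid."""
    log_post = np.zeros_like(u_grid)
    for chose_b, delay in zip(choices, delays):
        p_b = choice_probability(u_grid, delay, k, beta)
        log_post += np.log(p_b if chose_b else 1.0 - p_b)
    post = np.exp(log_post - log_post.max())
    return post / post.sum()

if __name__ == "__main__":
    u_grid = np.linspace(0.0, 5.0, 101)   # candidate utilities for option B
    choices = [0, 0, 1]                    # picked A while B was distant,
    delays  = [10, 10, 1]                  # but picked B once it was imminent
    # An observer that assumes no discounting bias (k = 0) explains these
    # choices with a middling utility for B; an observer that models
    # hyperbolic discounting (k > 0) can explain the same choices with a
    # much higher utility for B.
    for k in (0.0, 1.0):
        post = posterior_over_utility(choices, delays, k, beta=2.0, u_grid=u_grid)
        print(f"k={k}: posterior mean utility of B = {np.dot(u_grid, post):.2f}")
```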
Why think step-by-step? Reasoning emerges from the locality of experience
Humans have a powerful and mysterious capacity to reason. By working through
a series of purely mental steps, we can make inferences we would not be capable
of making directly -- despite the fact that we get no additional data from the
world. Similarly, large language models can perform better at complex tasks
through chain-of-thought reasoning, where they generate intermediate steps
before answering a question. We use language models to investigate the
questions of when and why reasoning is helpful, testing the hypothesis that
reasoning is effective when the training data consists of local clusters of
variables that influence each other strongly. These training conditions enable
the chaining of accurate local inferences in order to estimate relationships
between variables that were not seen together in training. We train an
autoregressive transformer on samples from joint distributions defined by Bayes
nets, but only include a subset of all the variables in each sample. We compare
language models' ability to match conditional probabilities both with and
without intermediate reasoning steps, finding that intermediate steps help only
when the training data is locally structured with respect to dependencies
between variables. Furthermore, intermediate variables need to be relevant to
the relationship between observed information and target inferences. Our
results illustrate how the statistical structure of training data drives the
effectiveness of reasoning step by step.
Comment: 8 pages, 3 figures
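A minimal sketch of the underlying idea (assuming a toy chain-structured Bayes net over binary variables rather than the paper's transformer setup; all variable names and probabilities are illustrative): when each training "document" only exposes an adjacent pair of variables, a long-range conditional cannot be estimated directly, but chaining the estimated local conditionals through the intermediate variables recovers it.

```python
# Locally structured training data and chained local inference,
# under the illustrative assumptions stated above.

import numpy as np

rng = np.random.default_rng(0)
N_VARS, N_SAMPLES = 5, 50_000

# Ground-truth chain X0 -> X1 -> ... -> X4 over binary variables.
p_x0 = 0.5
cond = [np.array([[0.8, 0.2],   # row = value of X_i, col = value of X_{i+1}
                  [0.3, 0.7]]) for _ in range(N_VARS - 1)]

def sample_chain():
    """Draw one full joint sample from the chain."""
    x = [rng.random() < p_x0]
    for T in cond:
        x.append(rng.random() < T[int(x[-1]), 1])
    return [int(v) for v in x]

# Locally structured training data: each sample exposes only one adjacent pair.
pair_counts = [np.zeros((2, 2)) for _ in range(N_VARS - 1)]
for _ in range(N_SAMPLES):
    x = sample_chain()
    i = rng.integers(N_VARS - 1)           # observe only (X_i, X_{i+1})
    pair_counts[i][x[i], x[i + 1]] += 1

# X0 and X4 never co-occur in the training data, so P(X4 | X0) cannot be
# estimated directly. Chaining the estimated local conditionals through the
# intermediate variables recovers it.
est_cond = [c / c.sum(axis=1, keepdims=True) for c in pair_counts]
chained = np.linalg.multi_dot(est_cond)    # estimated P(X4 | X0) via chaining
true = np.linalg.multi_dot(cond)
print(f"chained estimate P(X4=1 | X0=1) = {chained[1, 1]:.3f}")
print(f"true value       P(X4=1 | X0=1) = {true[1, 1]:.3f}")
```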
- …